## Introduction

A comparison of JULES-ES-1p0 wave01 members against the original ensemble (wave00).

Wave01 input parameter sets were chosen using history matching, so that they fall within Andy Wiltshire's basic constraints on NBP, NPP, cSoil and cVeg stocks at the end of the 20th century. We use 300 of the 500 members, holding back 2/5ths for later emulator validation.
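The selection-and-holdout split can be sketched as follows. The candidate matrix, its dimension, and the variable names here are hypothetical stand-ins, not the real wave01 design:

```r
# Sketch of the wave01 split: 500 candidate parameter sets, of which 300 are
# run and 200 (2/5ths) are held back for emulator validation.
# 'X_wave01' and the input dimension (32) are illustrative assumptions.
set.seed(42)
n_candidates <- 500
X_wave01 <- matrix(runif(n_candidates * 32), nrow = n_candidates)

ix_run  <- sample(n_candidates, 300)
X_run   <- X_wave01[ix_run, , drop = FALSE]   # members sent to JULES
X_valid <- X_wave01[-ix_run, , drop = FALSE]  # held back for validation

nrow(X_run)    # 300
nrow(X_valid)  # 200
```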

We answer some basic questions.

- What proportion of the new ensemble matches AW's constraints?
- How good is a GP emulator? Does it get better overall with the new ensemble members added? In particular, does it get better for those members within the AW constraints?
- Does the sensitivity analysis change?

## Preliminaries

Load libraries, functions and data.

How many run failures were there?

There are no NAs, but some relative humidity values are infinite. There are no “low NPP” ensemble members.

## [1] 117464.6
## [1] FALSE
##      row col
## [1,] 140   9
## [2,] 232   9
## [3,] 249   9
## [4,] 300   9
## [1] Inf Inf Inf Inf
## [1] "rh_lnd_sum"
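The checks above can be reproduced in miniature. The toy data frame below stands in for the real ensemble summary (where `rh_lnd_sum` is column 9 and the infinite values sit in rows 140, 232, 249 and 300):

```r
# Toy stand-in for the ensemble output summary: one row per member,
# one column per output, with an Inf planted in 'rh_lnd_sum'.
Y <- data.frame(npp_nlim_lnd_sum = c(50, 60, 55),
                rh_lnd_sum       = c(45, Inf, 48))

any(is.na(Y))                       # FALSE: no NAs
ix_inf <- which(Y == Inf, arr.ind = TRUE)
ix_inf                              # row/col indices of the infinite values
names(Y)[unique(ix_inf[, "col"])]   # name of the offending column
```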

## Ensemble behaviour in key (constraining) outputs

Global mean for the 20 years at the end of the 20th century. There is still a significant low bias in the cVeg output.
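The summary statistic used here can be sketched as below. The years, window and ensemble values are synthetic stand-ins; the real window is the last 20 years of the 20th century:

```r
# Sketch: mean over a 20-year window at the end of the 20th century
# (illustratively 1981-2000), for each ensemble member's yearly timeseries.
set.seed(1)
years <- 1861:2013
ens   <- matrix(rnorm(length(years) * 4, mean = 500), nrow = length(years))

ix <- which(years %in% 1981:2000)   # 20-year window
colMeans(ens[ix, ])                 # one 20-year global mean per member
```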

### What proportion of models now fall within Andy’s constraints?

A third! Better than before, but still not great, pointing at a significant model discrepancy in cVeg.

Of the 300 members of the wave01 ensemble, 100 pass Andy Wiltshire’s Level 2 constraints.

## [1] 100
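The counting step can be sketched as follows. The outputs and the bounds below are illustrative only; the real level 2 thresholds are Andy Wiltshire's:

```r
# Sketch of the constraint check: flag members whose outputs fall inside
# the (here purely illustrative) bounds, then count them.
set.seed(5)
Y_wave01 <- data.frame(cVeg_lnd_sum  = runif(300, 200, 900),
                       cSoil_lnd_sum = runif(300, 500, 3500))

pass <- Y_wave01$cVeg_lnd_sum  > 300 & Y_wave01$cVeg_lnd_sum  < 800 &
        Y_wave01$cSoil_lnd_sum > 750 & Y_wave01$cSoil_lnd_sum < 3000

sum(pass)   # number of wave01 members passing the constraints
```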

Pairs plot of the inputs that pass the constraints with respect to the limits of the original ensemble.

Timeseries of mean carbon cycle properties over the whole run.

Wave00 in blue and wave01 in red.

Plot Wave00 and Wave01 on top of one another.
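A minimal sketch of this overplotting with base graphics, using synthetic stand-ins for the two waves' timeseries:

```r
# Overplot two ensembles of timeseries: wave00 in blue, wave01 in red.
# The data here are synthetic; in the report these are global carbon
# cycle timeseries over the whole run.
set.seed(2)
t <- 1:100
wave00 <- matrix(rnorm(100 * 5, mean = rep(t / 10, 5)), ncol = 5)
wave01 <- matrix(rnorm(100 * 3, mean = rep(t / 10, 3)), ncol = 3)

matplot(t, wave00, type = "l", lty = 1, col = "blue",
        xlab = "time", ylab = "output")
matlines(t, wave01, lty = 1, col = "red")
```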

## Emulator fits

We hope that running the new ensemble gives us a better emulator, and allows us to rule out more input space. We particularly hope that the emulator is better for those members that are inside AW’s constraints.

First, we can look at the emulator errors in two cases: the level1a data (a basic carbon cycle), and then with the wave01 data added, which should have similar characteristics. (We should have eliminated the really bad simulations, but wave01 is not constrained to lie perfectly within AW's constraints.)

Emulator fit list of level 1a ensemble

Remove an outlier from the new wave and build emulators

##      nbp_lnd_sum npp_nlim_lnd_sum    cSoil_lnd_sum     cVeg_lnd_sum 
##              583              213              317              312
##      nbp_lnd_sum npp_nlim_lnd_sum    cSoil_lnd_sum     cVeg_lnd_sum 
##              314              243              243              243

Found the outlier: it looks like member 440.

## integer(0)

## Leave-one-out analyses of emulator prediction accuracy

The top row shows the leave-one-out prediction accuracy of the original wave00 ensemble, and the lower row the entire wave00 AND wave01 ensemble combined.
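The object names above (`loostats_km_*`) suggest the emulators are `km()` Gaussian processes from the DiceKriging package. A minimal leave-one-out sketch on synthetic data, assuming that package:

```r
# Leave-one-out accuracy for a km() Gaussian process emulator (DiceKriging).
# Design, response and covariance choice are illustrative stand-ins.
library(DiceKriging)

set.seed(3)
X <- matrix(runif(60), ncol = 2)        # 30 design points, 2 inputs
y <- sin(5 * X[, 1]) + X[, 2]^2         # toy simulator output

fit <- km(~1, design = X, response = y, covtype = "matern5_2",
          control = list(trace = FALSE))

loo <- leaveOneOut.km(fit, type = "UK") # LOO mean and sd at each design point
err <- loo$mean - y                     # LOO prediction errors
mean(abs(err))                          # mean absolute LOO error
```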

Emulator accuracy of members from wave00 and wave01 that pass level 2 (AW's) constraints

We see that the error stats for some of the outputs from wave01 are worse, but there are many more wave01 ensemble members that lie within the constraints.

“pmae” is “proportional mean absolute error”, which is the mean absolute error expressed as a percentage of the original (minimally constrained) ensemble range in that output.
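The definition above can be written directly. The function and its toy inputs are a sketch, not the report's implementation:

```r
# pmae: mean absolute error as a percentage of the (minimally constrained)
# ensemble range in that output.
pmae <- function(y, y_pred, ens_range) {
  100 * mean(abs(y_pred - y)) / diff(range(ens_range))
}

# Toy example: mean |error| = 0.75, ensemble range = 20, so pmae = 3.75%
y      <- c(10, 12, 15, 11)
y_pred <- c(10.5, 11.5, 14, 12)
pmae(y, y_pred, ens_range = c(5, 25))   # 3.75
```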

Does the emulator improve if you look at only the 37 members that pass level 2 constraints in wave00?

This gives us an idea of how good the emulator is where it really matters, and as the members are consistent, gives us a fairer idea of whether the emulators have improved with more members.

The good news is that the emulators are more accurate for wave01.

These leave-one-out prediction accuracy plots rank the ensemble members from largest underprediction to largest overprediction using the wave00 predictions. A perfect prediction would appear on the horizontal “zero” line.

Many of the wave01 predictions are closer to the horizontal line, and therefore more accurate predictions.

None of the predictions fall outside the uncertainty bounds, which suggests that the bounds are overconservative (they should be smaller).

Looking at the proportional mean absolute error (pmae), expressed in percent, we can see that it doesn’t improve much for the whole ensemble, but does improve significantly for the subset of ensemble members that fall within AW’s constraints from the first ensemble (marked "_sub").

# Extract the proportional mean absolute error (pmae) from each set of
# leave-one-out statistics: full ensembles, then the constrained subsets
pmae_wave00 <- lapply(loostats_km_Y_level1a, FUN = function(x) x$pmae)
pmae_wave01 <- lapply(loostats_km_Y_level1a_wave01, FUN = function(x) x$pmae)

pmae_wave00_sub <- lapply(loostats_km_Y_level1a_sub, FUN = function(x) x$pmae)
pmae_wave01_sub <- lapply(loostats_km_Y_level1a_wave01_sub, FUN = function(x) x$pmae)

# Combine into a four-column comparison table, one row per output
pmae_table <- cbind(pmae_wave00, pmae_wave01, pmae_wave00_sub, pmae_wave01_sub)

print(pmae_table)
##      pmae_wave00 pmae_wave01 pmae_wave00_sub pmae_wave01_sub
## [1,] 4.979909    4.927674    7.243207        4.913075       
## [2,] 4.282064    4.007752    4.804386        4.085869       
## [3,] 3.597215    3.7899      4.555873        3.834191       
## [4,] 4.242       4.516206    4.817045        3.226527

## Comparing the atmospheric growth rate in wave00, wave01 and observations

Andy asks: what constraint does that give us on cumulative NBP?

Eddy suggests measuring cumulative NBP against the atmospheric growth rate.

Calculate the atmospheric growth rate over 1984-2013 using a simple linear fit.
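The fit can be sketched as below. The CO2 values are synthetic stand-ins for the observations, and the implied growth rate (1.7 ppm/year) is only an illustrative choice:

```r
# Atmospheric growth rate over 1984-2013 from a simple linear fit:
# regress CO2 concentration on year and read off the slope.
# The 'co2' series here is synthetic, not the observed record.
set.seed(4)
years <- 1984:2013
co2   <- 344 + 1.7 * (years - 1984) + rnorm(length(years), sd = 0.5)

fit <- lm(co2 ~ years)
coef(fit)[["years"]]   # growth rate in ppm per year
```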